TECHnalysis Research Blog

December 14, 2021
Edge Computing Advances Limited Without Standards

By Bob O'Donnell

There’s little doubt that one of the most exciting and important topics in the tech world is the ongoing development and advancement of edge computing. That is, if you can figure out what “the edge” actually is.

One of the big problems is that nobody really seems to be able to do that in a concise and consistent manner. Nearly every tech vendor and industry prognosticator, it seems, has their own view of what the “edge,” and therefore edge computing, really is. This is understandable, in part, because there are legitimate cases to be made for how far the edge extends away from the core network, making it reasonable to talk about things like the near edge, the far edge, etc.

What does seem consistent through all of these definitional discussions, however, is that edge computing is a new form of distributed computing, where compute resources are scattered over many different locations. Modern microservice-based, containerized software architectures fit nicely into this world of dispersed, but connected, intelligence.
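
To make that architectural point concrete, here’s a minimal sketch of the kind of small, self-contained service that gets replicated across edge locations. It uses only Python’s standard library, and the /health endpoint and port number are illustrative choices, not drawn from any standard.

```python
# Minimal sketch of an edge microservice, using only the Python standard
# library so it can run unchanged on x86 or Arm nodes. The /health endpoint
# and port are illustrative choices, not part of any standard.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # In practice this would be packaged into a container image and
    # scheduled onto whichever edge node has capacity.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```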

The other point that seems relatively consistent across the many different versions and definitions of edge computing is that the available resources that can be tapped into at the “edge” are significantly more varied than what has been available in the past. Sure, there will be lots of powerful x86 CPUs (in fact, even more choices than before, given the significant impact that AMD has made and the rejuvenated competitiveness this challenge has brought to Intel), but there will be many other options as well. Arm-powered CPUs from major cloud vendors, like the latest Graviton3 from AWS, and new server CPU options from companies like Ampere, are becoming popular choices too. Some have even suggested Arm-powered processors could become dominant in power-sensitive “far edge” applications like 5G cell towers for MEC (mobile edge compute) implementations.

Of course, GPUs from Nvidia and AMD, along with an enormous range of dedicated AI processors from a whole host of both established and startup silicon companies, are also starting to make their presence felt in distributed computing environments, adding to the range of new computing resources available.

As powerful as this concept of seemingly unlimited computing resources may be, however, it does raise a significant, practical question. How exactly can developers build applications for the edge when they don’t necessarily know what resources will be available at the various locations in which their code will run?
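
Today, with no standard way to ask, developers are left probing each node ad hoc. Here’s a rough sketch of what that guesswork might look like in Python; the checks are illustrative (the GPU test, in particular, is crude), not any established convention:

```python
# Ad hoc probe of whatever hardware the local node offers, which is exactly
# the kind of per-node guesswork a standard query mechanism should replace.
import os
import platform
import shutil

def probe_node() -> dict:
    return {
        "cpu_arch": platform.machine(),   # e.g. "x86_64" or "aarch64"
        "cpu_count": os.cpu_count(),
        # Crude GPU check: presence of the NVIDIA management CLI on PATH.
        "nvidia_gpu_tools": shutil.which("nvidia-smi") is not None,
    }

if __name__ == "__main__":
    print(probe_node())
```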

Cloud computing enthusiasts may quickly point out that a somewhat related version of this same dilemma faced cloud developers in the past, and they developed technologies for software abstraction that essentially relieved software engineers of this burden. However, most cloud computing environments had a much smaller range of potential computing resources. Edge computing environments, on the other hand, will offer not only more choices but also different options across related sites (such as all the towers in a cellular network). The end result will likely be one of the most heterogeneous targets for software applications that has ever existed.

Companies like Intel are working to solve some of the heterogeneity issues with software frameworks like its oneAPI standard. oneAPI is Intel’s effort to create tools that let people write code that can smartly leverage the different capabilities of chips like CPUs, GPUs, FPGAs, AI accelerators and more, without needing to learn how to write software for each of them individually. Clearly, it’s a step in the right direction. However, it still doesn’t solve the bigger issue, because it’s only designed for Intel’s chips.
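
To illustrate the write-once, dispatch-anywhere pattern that oneAPI is reaching for, here’s a toy sketch in Python. To be clear, the Device class and priority scheme below are invented stand-ins for this illustration, not Intel’s actual API (which is built around C++ and the SYCL standard):

```python
# Toy illustration of the "write once, dispatch to the best device" pattern.
# The Device class and its priority scheme are invented stand-ins for this
# sketch, not Intel's actual oneAPI interfaces.
from dataclasses import dataclass

@dataclass
class Device:
    kind: str       # "cpu", "gpu", "fpga", "ai_accel"
    priority: int   # higher = preferred for this workload

def pick_device(available: list[Device]) -> Device:
    # One kernel definition; the runtime, not the developer, chooses
    # which silicon executes it.
    return max(available, key=lambda d: d.priority)

devices = [Device("cpu", 1), Device("gpu", 3), Device("ai_accel", 2)]
print(f"dispatching kernel to: {pick_device(devices).kind}")
```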

What seems to be missing are two key standards that can help define and extend the range of edge computing. First, there needs to be a standardized way to query what resources are available—including chip and network types, capacity, network throughput, latency, etc.—and a standard protocol or messaging method for returning the results of that query. Second, there needs to be a standard mechanism for interpreting those results and then either dynamically adjusting the application or providing the right kind of hardware abstraction layer that would allow the software to run on whatever type of distributed computing environment it finds itself in. By putting these two capabilities together, you could greatly enhance the ability to create a usable and sharable distributed computing environment.
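
Here’s a rough sketch of what those two standards might look like in practice, written in Python. Every field name, value, and threshold below is invented purely for illustration; settling on the real vocabulary is exactly the standards work that needs to happen:

```python
# Sketch of the two missing standards: (1) a common query/response format
# for node capabilities, and (2) a mechanism that interprets the response
# and adapts the application. All field names and thresholds are invented.
import json

CAPABILITY_QUERY = json.dumps({"query": "capabilities", "version": "0.1"})

# What a standardized node might send back in reply.
example_response = {
    "cpu_arch": "aarch64",
    "accelerators": ["gpu"],
    "latency_ms_to_core": 4.0,
    "free_memory_gb": 8,
}

def choose_code_path(caps: dict) -> str:
    # Standard #2: map the advertised capabilities onto whichever
    # variant of the application this node can actually run.
    if "gpu" in caps.get("accelerators", []):
        return "gpu_inference"
    if caps.get("free_memory_gb", 0) >= 4:
        return "cpu_inference_full"
    return "cpu_inference_quantized"

print(choose_code_path(example_response))  # -> "gpu_inference"
```

The important part isn’t the specific fields, but that both the query format and the interpretation logic would be shared across vendors rather than reinvented per platform.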

These are non-trivial tasks, however, and they would take a great deal of industry cooperation and discussion to create. Nevertheless, they seem essential if we don’t want edge computing to disintegrate into a convoluted mire of incompatible platforms. One possible option is the development of a higher-level “meta” platform through which diverse types of hardware and software could communicate and coexist. To be clear, I am not referring to a “metaverse” but rather a higher-order software layer. (At the same time, creating a metaverse-style digital world would undoubtedly require the unification, or at least standardization, of different edge computing concepts in order to provide a consistent means of visualizing such a world across different devices.) In the same way that internet standards like HTTP and HTML provide a common way to present information, this metaplatform could potentially offer a common means of computing information across an intelligently connected but highly distributed set of resources.

Admittedly, some of these discussions may be a bit too theoretical to bring to life soon. Still, for edge computing to move beyond the interesting concept stage to the realm of compelling experience, at least a few of them need to be addressed. If not, I’m concerned the real-world complexities of trying to integrate a highly diverse set of computing resources into a useful, powerful tool capable of running an exciting set of new applications could quickly become overwhelming. And that would be a real shame.

Here’s a link to the original column: https://www.linkedin.com/pulse/edge-computing-advances-limited-without-standards-bob-o-donnell

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.

 
